Homework 4
state | answer |
---|---|
0 | [q1.1] |
1 | [q1.2] |
state | answer |
---|---|
0 | [q1.3] |
1 | [q1.4] |
state | answer |
---|---|
0 | [q1.5] |
1 | [q1.6] |
Your answers will be evaluated to 4 decimal places.
coefficient | value |
---|---|
a | [q2.1] |
b | [q2.2] |
c | [q2.3] |
d | [q2.4] |
state | answer |
---|---|
0 | [q2.5] |
1 | [q2.6] |
Consider this HMM. The prior probability $P(X_0)$, dynamics model $P(X_{t+1} \mid X_t)$, and sensor model $P(E_t \mid X_t)$ are as follows:
Given the observed evidence sequence, what is the most likely explanation, i.e., the most probable sequence of hidden states?
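The concrete prior, dynamics, and sensor tables are not reproduced in this text, so as a hedged reference for how those three components combine, here is a minimal sketch of HMM filtering (the forward algorithm) over a binary state; all numbers are illustrative assumptions, not the homework's values:

```python
# Illustrative binary-state HMM; these numbers are assumptions for the
# sketch, not the homework's actual prior/dynamics/sensor tables.
prior = [0.5, 0.5]                      # P(X_0 = 0), P(X_0 = 1)
trans = [[0.7, 0.3],                    # P(X_{t+1} | X_t = 0)
         [0.4, 0.6]]                    # P(X_{t+1} | X_t = 1)
sensor = [[0.9, 0.1],                   # P(E_t | X_t = 0)
          [0.2, 0.8]]                   # P(E_t | X_t = 1)

def filter_hmm(evidence):
    """Return P(X_t | e_{1:t}) after each observation in `evidence`."""
    belief = prior[:]
    history = []
    for e in evidence:
        # Predict step: push the belief through the dynamics model.
        predicted = [sum(belief[x] * trans[x][xp] for x in range(2))
                     for xp in range(2)]
        # Update step: weight by the sensor model, then normalize.
        updated = [predicted[xp] * sensor[xp][e] for xp in range(2)]
        z = sum(updated)
        belief = [p / z for p in updated]
        history.append(belief)
    return history
```

The predict/update/normalize loop is the same pattern the answer tables above ask for, just with the homework's own numbers substituted in.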
The Viterbi algorithm finds the most probable sequence of hidden states $x_{1:T}$, given a sequence of observations $e_{1:T}$. For the HMM structure above, which of the following probabilities are maximized by the sequence of states returned by the Viterbi algorithm? Select all correct option(s).
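As a concrete reference for what Viterbi computes, here is a minimal max-product sketch; the model numbers used to exercise it are assumptions, not the homework's:

```python
def viterbi(evidence, start, trans, sensor):
    """Most probable hidden state sequence x_{1:T} given e_{1:T}."""
    states = range(len(start))
    # m[x] = max over x_{1:t-1} of P(x_{1:t-1}, X_t = x, e_{1:t})
    m = [start[x] * sensor[x][evidence[0]] for x in states]
    back = []                           # back-pointers, one list per step
    for e in evidence[1:]:
        prev, ptr, m = m, [], []
        for x in states:
            best = max(states, key=lambda xp: prev[xp] * trans[xp][x])
            m.append(prev[best] * trans[best][x] * sensor[x][e])
            ptr.append(best)
        back.append(ptr)
    # Follow back-pointers from the best final state.
    x = max(states, key=lambda s: m[s])
    path = [x]
    for ptr in reversed(back):
        x = ptr[x]
        path.append(x)
    return path[::-1]
```

Note that the recursion maximizes a joint probability over the whole state sequence, replacing the forward algorithm's sum over previous states with a max.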
Consider the following graph, where W1 and W2 can each be either R or S, and I1 and I2 can each be either T or F:
The conditional probability distributions are given by:
Now we want to try approximate inference through sampling. Applying likelihood weighting, suppose we generate the following six samples given the evidence I1 = T and I2 = F:
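As a sketch of how such likelihood-weighted samples are generated: hidden variables are sampled from their conditional distributions in topological order, while evidence variables are fixed and instead multiply the sample's weight. The chain structure W1 -> W2 (with sensors W1 -> I1, W2 -> I2) and all CPT numbers below are illustrative assumptions, not the homework's values:

```python
import random

# Assumed structure: W1 -> W2, with sensors W1 -> I1 and W2 -> I2.
# All CPT numbers below are illustrative, not the homework's values.
P_W1 = {'R': 0.5, 'S': 0.5}                         # P(W1)
P_W2 = {'R': {'R': 0.7, 'S': 0.3},                  # P(W2 | W1)
        'S': {'R': 0.3, 'S': 0.7}}
P_I = {'R': {'T': 0.8, 'F': 0.2},                   # P(I | W)
       'S': {'T': 0.2, 'F': 0.8}}

def draw(dist):
    """Sample a value from a {value: probability} distribution."""
    r, acc = random.random(), 0.0
    for value, p in dist.items():
        acc += p
        if r < acc:
            return value
    return value                        # guard against rounding

def weighted_sample(i1='T', i2='F'):
    """One likelihood-weighted sample given evidence I1=i1, I2=i2."""
    w = 1.0
    w1 = draw(P_W1)                     # W1 is hidden: sample it
    w *= P_I[w1][i1]                    # I1 is evidence: weight it
    w2 = draw(P_W2[w1])                 # W2 is hidden: sample it
    w *= P_I[w2][i2]                    # I2 is evidence: weight it
    return (w1, w2), w
```

Each of the six samples in the question carries a weight of exactly this form: the product of the evidence variables' conditional probabilities given their sampled parents.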
After the observation step of particle filtering, the particles and their weights are as follows:
Fill in the weighted sample distribution you used in the resampling step. Your answers will be evaluated to 4 decimal places.
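The weighted sample distribution is simply the particle weights normalized to sum to 1; resampling then draws new unweighted particles from that distribution. A minimal sketch, with made-up particles and weights (not the homework's values):

```python
import random

def normalize(weighted):
    """Weighted sample distribution: particle weights scaled to sum to 1."""
    total = sum(weighted.values())
    return {state: w / total for state, w in weighted.items()}

def resample(weighted, n):
    """Draw n unweighted particles in proportion to their weights."""
    dist = normalize(weighted)
    states = list(dist)
    return random.choices(states, weights=[dist[s] for s in states], k=n)

# Illustrative particles and weights (assumptions, not the homework's):
particles = {'A': 0.3, 'B': 0.9, 'C': 0.3}
```

For the answer boxes above, only the `normalize` step is needed: each entry is that particle's weight divided by the total weight, reported to 4 decimal places.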